Signature Codes for a Noisy Adder Multiple Access Channel
In this work, we consider -ary signature codes of length and size
for a noisy adder multiple access channel. A signature code in this model has
the property that any subset of codewords can be uniquely reconstructed based
on any vector that is obtained from the sum (over integers) of these codewords.
We show that there exists an algorithm to construct a signature code of length
capable of correcting
errors at the channel output, where .
Furthermore, we present an explicit construction of signature codewords with
polynomial complexity that can correct up to errors for a codeword
length , where is a small non-negative number. Moreover, we prove
several non-existence results (converse bounds) for
-ary signature codes enabling error correction.
Comment: 12 pages, 0 figures, submitted to the 2022 IEEE Information Theory Workshop
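The defining property above — every subset of codewords must produce a distinct coordinate-wise integer sum at the adder channel output — can be checked by brute force for small codebooks. The sketch below is illustrative only and is not the paper's construction; the example codebooks are made up.

```python
from itertools import combinations

def is_signature_code(codewords):
    """Return True if every subset of `codewords` has a distinct
    coordinate-wise integer sum, i.e., any subset can be uniquely
    recovered from the (noiseless) adder channel output."""
    length = len(codewords[0])
    seen = set()
    for r in range(len(codewords) + 1):
        for subset in combinations(codewords, r):
            s = tuple(map(sum, zip(*subset))) if subset else (0,) * length
            if s in seen:
                return False  # two different subsets collide at the output
            seen.add(s)
    return True

# An identity-like codebook is a trivial signature code: subset sums
# are indicator vectors, hence all distinct.
print(is_signature_code([(1, 0, 0), (0, 1, 0), (0, 0, 1)]))  # True
# Here the subsets {(1,1)} and {(1,0), (0,1)} produce the same sum.
print(is_signature_code([(1, 0), (0, 1), (1, 1)]))  # False
```

The check is exponential in the number of codewords, which is exactly why explicit constructions such as the one in the paper matter.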
Information- and Coding-Theoretic Analysis of the RLWE Channel
Several cryptosystems based on the \emph{Ring Learning with Errors} (RLWE)
problem have been proposed within the NIST post-quantum cryptography
standardization process, e.g., NewHope. Furthermore, there are systems like
Kyber, which are based on the closely related MLWE assumption. Both previously
mentioned schemes feature a non-zero decryption failure rate (DFR). The
combination of encryption and decryption for these kinds of algorithms can be
interpreted as data transmission over noisy channels. To the best of our
knowledge, this paper is the first work that analyzes the capacity of this
channel. We show how to modify the encryption schemes such that the input
alphabets of the corresponding channels are increased. In particular, we
present lower bounds on their capacities which show that the transmission rate
can be significantly increased compared to standard proposals in the
literature. Furthermore, under the common assumption of stochastically
independent coefficient failures, we give lower bounds on achievable rates
based on both the Gilbert-Varshamov bound and concrete code constructions using
BCH codes. By means of our constructions, we can either increase the total
bitrate (by a factor of for Kyber and a factor of for NewHope)
while guaranteeing the same DFR, or, for the same bitrate, significantly
reduce the DFR for all schemes considered in this work (e.g., for NewHope
from to ).
Comment: 13 pages, 4 figures, 3 tables
Analysis of Communication Channels Related to Physical Unclonable Functions
Cryptographic algorithms rely on the secrecy of their corresponding keys. On embedded systems with standard CMOS chips, where
secure permanent memory such as flash is not available as a key storage, the secret key can be derived from Physical Unclonable Functions
(PUFs) that make use of minuscule manufacturing variations of, for instance, SRAM cells. Since PUFs are affected by environmental changes,
the reliable reproduction of the PUF key requires error correction. For
silicon PUFs with binary output, errors occur in the form of bitflips
within the PUF response. Modeling the channel as a Binary Symmetric
Channel (BSC) with fixed crossover probability p is only a first-order
approximation of the real behavior of the PUF response. We propose a
more realistic channel model, referred to as the Varying Binary Symmetric Channel (VBSC), which takes into account that the reliability of
different PUF response bits may not be equal. We investigate its channel
capacity for various scenarios which differ in the channel state information (CSI) present at encoder and decoder. We compare the capacity
results for the VBSC in the different CSI cases with reference to the
distribution of the bitflip probability according to a model by Maes et al.
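When the crossover probability varies per bit and is known at the decoder, a standard achievable-rate expression is the expectation E[1 − h(P)] over the reliability distribution. The Monte-Carlo sketch below assumes, purely for illustration, a Beta-distributed P; the distribution actually studied in the paper (following Maes et al.) differs.

```python
import math
import random

def binary_entropy(p):
    """Binary entropy h(p) in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def vbsc_rate_with_csi(sample_p, trials=50_000, seed=1):
    """Monte-Carlo estimate of E[1 - h(P)], an achievable rate for the
    VBSC when each bit's crossover probability P is known at the decoder."""
    rng = random.Random(seed)
    total = sum(1.0 - binary_entropy(sample_p(rng)) for _ in range(trials))
    return total / trials

# Illustrative only: Beta(1, 9) concentrates P near small values,
# mimicking mostly-reliable PUF response bits.
print(vbsc_rate_with_csi(lambda rng: rng.betavariate(1.0, 9.0)))
```

For a degenerate distribution (P constant at p) the estimate collapses to the plain BSC capacity 1 − h(p), which connects the VBSC back to the first-order model criticized above.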
FuLeeca: A Lee-based Signature Scheme
In this work we introduce a new code-based signature scheme, called \textsf{FuLeeca}, based on the NP-hard problem of finding codewords of a given Lee weight. The scheme follows the Hash-and-Sign approach applied to quasi-cyclic codes. Similar approaches in the Hamming metric have suffered from statistical attacks, which revealed the small support of the secret basis. Using the Lee metric, we are able to thwart such attacks. We use existing hardness results on the underlying problem and study adapted statistical attacks. We propose parameters for \textsf{FuLeeca}~and compare them to an extensive list of proposed post-quantum secure signature schemes, including the ones already standardized by NIST. This comparison reveals that \textsf{FuLeeca}~is competitive. For example, for NIST category I, i.e., 160 bits of classical security, we obtain an average signature size of 1100 bytes and a public key size of 1318 bytes. Comparing the total communication cost, i.e., the sum of the signature and public key sizes, we see that \textsf{FuLeeca}~is only outperformed by Falcon, while the other standardized schemes Dilithium and SPHINCS+ show larger communication costs than \textsf{FuLeeca}.
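For readers unfamiliar with the metric, the Lee weight of a vector over Z_q sums, per coordinate, the shorter of the two cyclic distances to zero. A minimal sketch of the definition (illustrative only, not \textsf{FuLeeca}'s actual implementation):

```python
def lee_weight(vec, q):
    """Lee weight over Z_q: coordinate x contributes min(x mod q, q - x mod q)."""
    return sum(min(x % q, q - (x % q)) for x in vec)

def lee_distance(u, v, q):
    """The Lee distance is the Lee weight of the coordinate-wise difference."""
    return lee_weight([a - b for a, b in zip(u, v)], q)

print(lee_weight([1, 6, 3], 7))  # 1 + 1 + 3 = 5
```

Over Z_7 the coordinate 6 has Lee weight 1 (it is one step below 0 on the cycle), which is exactly the asymmetry that distinguishes the Lee metric from the Hamming metric.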
Higher rates and information-theoretic analysis for the RLWE channel
The Learning with Errors (LWE) problem is considered to be hard and forms the foundation of various cryptographic algorithms. Several cryptosystems based on the closely related Ring Learning with Errors (RLWE) problem have been proposed within the NIST PQC standardization process, e.g., the systems LAC and NewHope. The combination of encryption and decryption for these kinds of algorithms can be interpreted as data transmission over noisy channels. To the best of our knowledge, this paper is the first work that analyzes the capacity of this channel. We extend this channel from binary to q-ary alphabets and show that this does not compromise the security of the related RLWE-based schemes if appropriate error-correcting codes are used to prevent the decryption failure rate (DFR) from increasing. We give a lower bound on the capacity of this channel, showing that the achievable asymptotic rates are substantially higher (5.7 times for LAC and 10.7 times for NewHope) than the ones currently deployed in the finite-length regime. Furthermore, under the assumption of stochastically independent coefficient failures, we show that substantially higher rates can also be achieved in the finite-length setting by using the Gilbert-Varshamov bound. Moreover, we give explicit code constructions increasing the achievable rate by a factor of 2 for LAC and a factor of 7 for NewHope without increasing the DFR for the respective parameter sets achieving a security level equivalent to AES256.
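The Gilbert-Varshamov argument invoked above guarantees, for length n and minimum distance d, a binary code with at least 2^n / V(n, d−1) codewords, where V(n, r) is the volume of a Hamming ball. A minimal sketch of that guarantee (binary case only; the paper's q-ary setting generalizes it):

```python
from math import comb, floor, log2

def hamming_ball_volume(n, r):
    """Number of length-n binary vectors within Hamming distance r of a point."""
    return sum(comb(n, i) for i in range(r + 1))

def gv_guaranteed_bits(n, d):
    """Information bits guaranteed by the Gilbert-Varshamov bound: a binary
    code of length n and minimum distance d exists with at least
    2^n / V(n, d - 1) codewords, i.e., floor(n - log2 V(n, d - 1)) bits."""
    return floor(n - log2(hamming_ball_volume(n, d - 1)))

print(gv_guaranteed_bits(7, 3))  # 2, i.e., at least 4 codewords
```

The bound is non-constructive; the explicit code constructions mentioned in the abstract are what turn such existence guarantees into deployable parameter sets.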